
    Device and Circuit Architectures for In‐Memory Computing

    With the rise of artificial intelligence (AI), computing systems face new challenges related to the large amount of data and the increasing burden of communication between the memory and the processing unit. In-memory computing (IMC) appears as a promising approach to suppress this memory bottleneck and enable higher parallelism of data processing, thanks to the memory array architecture. As a result, IMC achieves higher throughput and lower energy consumption than the conventional digital approach, not only for typical AI tasks but also for general-purpose problems such as constraint satisfaction problems (CSPs) and linear algebra. Herein, an overview of IMC is provided in terms of memory devices and circuit architectures. First, the memory device technologies adopted for IMC are summarized, covering both charge-based memories and emerging devices relying on electrically induced material modification at the chemical or physical level. Then, computational memory programming and the corresponding device nonidealities are described with reference to offline and online training of IMC circuits. Finally, array architectures for computing are reviewed, including typical architectures for neural network accelerators, content addressable memory (CAM), and novel circuit topologies for general-purpose computing with low complexity.
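    The parallelism described above follows directly from Ohm's and Kirchhoff's laws in a crosspoint array: each cell's conductance stores one matrix entry, input voltages encode the vector, and the column currents deliver the matrix-vector product in a single step. A minimal numerical sketch of that mapping (values are illustrative, not from any of the works listed here):

```python
import numpy as np

# Conductance matrix: cell (i, j) stores one matrix weight (in siemens).
G = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Input vector encoded as row voltages (in volts).
V = np.array([0.5, 1.0])

# Ohm's law per cell (I = G * V) and Kirchhoff's current law per column
# yield the matrix-vector product as the vector of column currents.
I = G.T @ V   # current collected at each column

print(I)
```

The multiply-accumulate happens in the analog domain, which is why the throughput and energy advantages over a digital datapath arise.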

    Phase change materials in non-volatile storage

    After revolutionizing the technology of optical data storage, phase change materials are being adopted in non-volatile semiconductor memories. Their success in electronic storage is mostly due to the unique properties of the amorphous state, where carrier transport phenomena and thermally induced phase change cooperate to make high-speed, low-voltage operation and stable data retention possible within the same material. This paper reviews the key physical properties that make this phase so special, the quantitative framework of cell performance, and the future perspectives of phase-change memory devices at the deep nanoscale.

    A 2-transistor/1-resistor artificial synapse capable of communication and stochastic learning in neuromorphic systems

    Resistive (or memristive) switching devices based on metal oxides find applications in memory, logic, and neuromorphic computing systems. Their small area, low-power operation, and high functionality meet the challenges of brain-inspired computing, which aims at achieving a huge density of active connections (synapses) with low operating power. This work presents a new artificial synapse scheme, consisting of a memristive switch connected to two transistors that gate the communication and learning operations. Spike-timing-dependent plasticity (STDP) is achieved through appropriate shaping of the pre-synaptic and post-synaptic spikes. Experiments with integrated artificial synapses demonstrate STDP with stochastic behavior due to (i) the natural variability of set/reset processes in the nanoscale switch and (ii) the different response of the switch to a given stimulus depending on its initial state. Experimental results are confirmed by model-based simulations of the memristive switching. Finally, system-level simulations of a two-layer neural network with a simplified STDP model show random learning and recognition of patterns.
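    The pair-based STDP rule behind this learning scheme can be sketched as follows: the weight is potentiated when the pre-synaptic spike precedes the post-synaptic one, and depressed otherwise, with an exponential dependence on the timing difference. Amplitudes and time constants below are illustrative placeholders, not the device's measured values:

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: weight change as a function of spike timing.

    t_pre, t_post: spike times in ms. Positive dt (pre before post,
    i.e. causal) gives potentiation; negative dt gives depression.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # potentiation
    return -a_minus * math.exp(dt / tau)       # depression

# Pre spike 5 ms before post spike -> positive weight change
print(stdp_delta_w(t_pre=10.0, t_post=15.0))
# Pre spike 5 ms after post spike -> negative weight change
print(stdp_delta_w(t_pre=15.0, t_post=10.0))
```

In the device, this deterministic curve is overlaid with the stochastic set/reset behavior described in the abstract.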

    Decision Making by a Neuromorphic Network of Volatile Resistive Switching Memories

    The necessity of having an electronic device that works on biologically relevant time scales with a small footprint has boosted research into a new class of emerging memories. Ag-based volatile resistive switching memories (RRAMs) feature a spontaneous change of device conductance that resembles biological mechanisms. They rely on the formation and self-disruption of a metallic conductive filament through an oxide layer, with a retention time ranging from a few milliseconds to several seconds, widely tunable through the maximum current flowing through the device. Here we demonstrate a neuromorphic system based on volatile RRAMs that mimics the principles of biological decision-making behavior and tackles the Two-Alternative Forced Choice problem, in which a subject must choose between two alternatives relying not on precise knowledge of the problem but on noisy perceptions.
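    The Two-Alternative Forced Choice task is commonly modeled as noisy evidence accumulation to a bound: a decision variable integrates a weak drift plus noise, and the choice is whichever threshold is crossed first. A minimal sketch of that behavioral model (a generic drift-diffusion analogue, not the RRAM circuit itself; parameters are illustrative):

```python
import random

def two_afc_trial(drift=0.1, noise=1.0, threshold=10.0, seed=None):
    """Drift-diffusion style 2AFC: accumulate noisy evidence until one
    of the two bounds (+threshold / -threshold) is reached."""
    rng = random.Random(seed)
    x = 0.0       # decision variable
    steps = 0     # elapsed decision time (in integration steps)
    while abs(x) < threshold:
        x += drift + rng.gauss(0.0, noise)
        steps += 1
    choice = "A" if x > 0 else "B"
    return choice, steps

choice, steps = two_afc_trial(seed=42)
print(choice, steps)
```

Larger drift (clearer evidence) biases the outcome toward one alternative and shortens decision time, mirroring the speed-accuracy behavior the RRAM network reproduces physically.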

    In-memory eigenvector computation in time O(1)

    In-memory computing with crosspoint resistive memory arrays has gained enormous attention as a way to accelerate the matrix-vector multiplication at the core of data-centric applications. By combining a crosspoint array with feedback amplifiers, it is possible to compute matrix eigenvectors in one step, without algorithmic iterations. In this work, the time complexity of the eigenvector computation is investigated based on a feedback analysis of the crosspoint circuit. The results show that the computing time is determined by the mismatch between the eigenvalues implemented in the circuit, which controls the rising speed of the output voltages. For a dataset of random matrices, the time for computing the dominant eigenvector is constant across matrix sizes, i.e., the time complexity is O(1). The O(1) time complexity is also supported by simulations of PageRank on real-world datasets. This work paves the way for fast, energy-efficient accelerators for eigenvector computation in a wide range of practical applications.
    Comment: Accepted by Adv. Intell. Sys
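    For contrast with the one-step circuit, the conventional software baseline is power iteration, whose convergence rate is likewise set by the separation between the dominant eigenvalue and the next one, analogous to the eigenvalue-mismatch dependence of the circuit's rise time. A hypothetical illustration (not the circuit model):

```python
import numpy as np

def dominant_eigenvector(A, iters=200):
    """Power iteration: iterative baseline for the dominant eigenvector.
    Convergence rate depends on |lambda_2 / lambda_1|, mirroring the
    eigenvalue-mismatch dependence of the crosspoint circuit."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

# Small symmetric test matrix with eigenvalues (7 ± sqrt(5)) / 2
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
v = dominant_eigenvector(A)
lam = v @ A @ v            # Rayleigh quotient estimate of lambda_1
assert np.allclose(A @ v, lam * v, atol=1e-6)
```

The circuit replaces the loop with continuous-time analog dynamics, which is what removes the dependence on the number of iterations.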

    In-memory computing with resistive switching devices


    Time complexity of in-memory solution of linear systems

    In-memory computing with crosspoint resistive memory arrays has been shown to accelerate data-centric computations such as the training and inference of deep neural networks, thanks to the high parallelism endowed by the physical laws governing electrical circuits. By connecting crosspoint arrays with negative feedback amplifiers, it is possible to solve linear algebraic problems such as linear systems and matrix eigenvectors in just one step. Based on the theory of feedback circuits, we study the dynamics of the solution of linear systems within a memory array, showing that the time complexity of the solution has no direct dependence on the problem size N; rather, it is governed by the minimal eigenvalue of an associated matrix derived from the coefficient matrix. We show that, when the linear system is modeled by a covariance matrix, the time complexity is O(logN) or O(1). In the case of sparse positive-definite linear systems, the time complexity is solely determined by the minimal eigenvalue of the coefficient matrix. These results demonstrate the high speed of the circuit for solving linear systems in a wide range of applications, thus supporting in-memory computing as a strong candidate for future big-data and machine-learning accelerators.
    Comment: Accepted by IEEE Trans. Electron Devices. The authors thank Scott Aaronson for helpful discussion about time complexity
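    The eigenvalue-governed settling behavior described above can be illustrated with a simple continuous-time analogue: gradient-flow dynamics dx/dt = -(Ax - b) converge to the solution of Ax = b at a rate set by the smallest eigenvalue of A, independently of the matrix size for well-conditioned problems. This is an illustrative model, not the feedback-circuit equations themselves:

```python
import numpy as np

def solve_by_dynamics(A, b, dt=0.01, steps=5000):
    """Integrate dx/dt = -(A x - b) with forward Euler.
    For symmetric positive-definite A, x(t) -> A^{-1} b, and the
    slowest-decaying mode has time constant 1 / lambda_min(A)."""
    x = np.zeros_like(b)
    for _ in range(steps):
        x = x - dt * (A @ x - b)
    return x

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # SPD: eigenvalues ~3.62 and ~1.38
b = np.array([1.0, 1.0])

x = solve_by_dynamics(A, b)
assert np.allclose(A @ x, b, atol=1e-6)
```

The settling time depends only on lambda_min, not on N, which is the software counterpart of the size-independent time complexity reported in the paper.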

    Memtransistor Devices Based on MoS2 Multilayers with Volatile Switching due to Ag Cation Migration

    In recent years, the need for fast, robust, and scalable memory devices has spurred the exploration of advanced materials with unique electrical properties. Among these materials, 2D semiconductors are promising candidates as they combine atomically thin size, semiconductor behavior, and complementary metal-oxide-semiconductor (CMOS) compatibility. Here, a three-terminal memtransistor device based on multilayer MoS2 with an ultrashort channel length is presented, combining the usual transistor behavior of 2D semiconductors with resistive switching memory operation. The volatile switching behavior is explained by Ag cation migration along the channel surface. An extensive physical and electrical characterization investigating the fundamental properties of the device is presented. Finally, a chain-type memory array of memtransistors, similar to a NAND flash structure, is demonstrated, where individual memory devices can be selected for write and read, paving the way for high-density 3D memories based on 2D semiconductors.
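    The NAND-like selection scheme mentioned above can be sketched abstractly: the cells sit in series, unselected cells are driven with a pass voltage so they conduct regardless of their stored state, and the chain current is therefore limited by the selected cell alone. All values and names below are hypothetical, not measured device parameters:

```python
# Hypothetical model of reading one cell in a NAND-like series chain.
G_ON_PASS = 1e-4   # conductance of an unselected cell under pass voltage (S)
chain_states = [1e-4, 1e-7, 1e-4]  # stored conductances; cell 1 is "off"

def read_cell(chain, selected, v_chain=1.0, g_pass=G_ON_PASS):
    """Series chain: total resistance is the sum of cell resistances.
    Unselected cells are forced conductive (pass voltage), so only the
    selected cell contributes its stored state to the chain current."""
    r_total = sum(
        1.0 / (g if i == selected else g_pass)
        for i, g in enumerate(chain)
    )
    return v_chain / r_total   # chain read current (A)

i_on = read_cell(chain_states, selected=0)   # high-conductance cell
i_off = read_cell(chain_states, selected=1)  # low-conductance cell
assert i_on > i_off
```

The contrast between i_on and i_off is what allows a single cell's state to be read out despite the series connection.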

    Reservoir Computing with Charge-Trap Memory Based on a MoS2 Channel for Neuromorphic Engineering

    Novel memory devices are essential for developing low-power, fast, and accurate in-memory computing and neuromorphic engineering concepts that can compete with conventional complementary metal-oxide-semiconductor (CMOS) digital processors. 2D semiconductors provide a novel platform for advanced devices with atomic thickness, low-current operation, and 3D integration capability. This work presents a charge-trap memory (CTM) device with a MoS2 channel, where the memory operation arises from electron trapping/detrapping at interface states. Transistor operation, memory characteristics, and synaptic potentiation/depression for neuromorphic applications are demonstrated. The CTM device shows outstanding linearity of potentiation under applied drain pulses of equal amplitude. Finally, pattern recognition is demonstrated by reservoir computing, where the input pattern is applied as a stimulation of the MoS2-based CTMs while the output current after stimulation is processed by a feedforward readout network. The good accuracy, low-current operation, and robustness to random input bit flips make the CTM device a promising technology for future high-density neuromorphic computing concepts.
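    The reservoir-computing scheme described above has a simple abstract structure: a fixed nonlinear reservoir (here, the CTM devices) maps inputs to high-dimensional states, and only a linear readout is trained. A minimal sketch with the device physics replaced by a hypothetical random nonlinear projection, and a ridge-regression readout (all names and the toy task are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def reservoir_states(X, n_reservoir=50):
    """Hypothetical stand-in for the CTM array: a fixed random
    nonlinear projection of the input (the reservoir is not trained)."""
    W_in = rng.normal(size=(X.shape[1], n_reservoir))
    return np.tanh(X @ W_in)

# Toy task: classify 4-bit input patterns by parity
X = np.array([[int(b) for b in f"{i:04b}"] for i in range(16)], dtype=float)
y = X.sum(axis=1) % 2

H = reservoir_states(X)
# Train only the linear readout, via ridge regression
ridge = 1e-3
W_out = np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ y)

pred = (H @ W_out > 0.5).astype(float)
accuracy = (pred == y).mean()
print(accuracy)
```

Because only W_out is trained, device nonidealities in the reservoir mostly do not need to be compensated, which is part of the scheme's appeal for hardware.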